An Algorithm for Training Polynomial Networks
Authors
Abstract
We consider deep neural networks, in which the output of each node is a quadratic function of its inputs. Similar to other deep architectures, these networks can compactly represent any function on a finite training set. The main goal of this paper is the derivation of an efficient layer-by-layer algorithm for training such networks, which we denote as the Basis Learner. The algorithm is a universal learner in the sense that the training error is guaranteed to decrease at every iteration, and can eventually reach zero under mild conditions. We present practical implementations of this algorithm, as well as preliminary experimental results. We also compare our deep architecture to shallow architectures for learning polynomials, in particular kernel learning.
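The abstract names the two ingredients but gives no pseudo-code. As a rough illustration only (our own NumPy sketch, not the authors' Basis Learner), the snippet below builds hidden units that are quadratic functions of the previous layer's features and trains layer by layer: all earlier features are kept and a least-squares readout is refit after every new layer, which is why the training error cannot increase. Random projections stand in for the paper's actual greedy feature-selection step.

```python
import numpy as np

def quadratic_layer(F, n_new, rng):
    # Each new unit outputs (w1 . F) * (w2 . F): a quadratic function of the
    # previous layer's features.  (Random weights here only stand in for the
    # Basis Learner's greedy feature selection.)
    W1 = rng.standard_normal((F.shape[1], n_new))
    W2 = rng.standard_normal((F.shape[1], n_new))
    return (F @ W1) * (F @ W2)

def fit_polynomial_net(X, y, depth=3, width=30, seed=0):
    rng = np.random.default_rng(seed)
    feats = [np.hstack([np.ones((len(X), 1)), X])]   # layer 1: affine features
    w = None
    for _ in range(depth - 1):
        feats.append(quadratic_layer(feats[-1], width, rng))
        Phi = np.hstack(feats)                       # keep every feature built so far
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # refit the linear output layer
        print("train MSE:", float(np.mean((Phi @ w - y) ** 2)))
    return feats, w

X = np.random.default_rng(1).standard_normal((200, 5))
y = X[:, 0] * X[:, 1] - X[:, 2] ** 2                 # a simple degree-2 polynomial target
fit_polynomial_net(X, y)
```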
Similar resources
An efficient symmetric polynomial-based key establishment protocol for wireless sensor networks
An essential requirement for providing secure services in wireless sensor networks is the ability to establish pairwise keys among sensors. Due to resource constraints on the sensors, the key establishment scheme should not create significant overhead. To date, several key establishment schemes have been proposed. Some of these have appropriate connectivity and resistance against key exposure, ...
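The excerpt above breaks off before the proposed protocol is described. Purely for orientation, here is a minimal sketch of the classical symmetric bivariate polynomial key predistribution idea (Blundo-style) that such schemes build on; the modulus, degree, and node IDs are illustrative, and the paper's own construction presumably adds refinements not shown here.

```python
import random

Q = 2**31 - 1    # public prime modulus (illustrative size only)
T = 3            # polynomial degree: the basic scheme tolerates up to T colluding nodes

def setup(t=T, q=Q, seed=1):
    # Trusted setup: draw a symmetric bivariate polynomial
    # f(x, y) = sum_{i,j} a[i][j] * x^i * y^j  with  a[i][j] == a[j][i].
    rng = random.Random(seed)
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = rng.randrange(q)
    return a

def share_for(node_id, a, q=Q):
    # Univariate share g_u(y) = f(u, y), preloaded onto sensor u before deployment.
    t = len(a) - 1
    return [sum(a[i][j] * pow(node_id, i, q) for i in range(t + 1)) % q
            for j in range(t + 1)]

def pairwise_key(my_share, peer_id, q=Q):
    # Evaluate g_u(peer); symmetry of f gives g_u(v) == g_v(u), the shared key.
    return sum(c * pow(peer_id, j, q) for j, c in enumerate(my_share)) % q

a = setup()
assert pairwise_key(share_for(17, a), 42) == pairwise_key(share_for(42, a), 17)
```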
Classification of ECG signals using Hermite functions and MLP neural networks
Classification of heart arrhythmia is an important step in developing devices for monitoring the health of individuals. This paper proposes a three-module system for classification of electrocardiogram (ECG) beats. These modules are: a denoising module, a feature extraction module, and a classification module. In the first module the stationary wavelet transform (SWT) is used for noise reduction of ...
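As a rough, self-contained sketch of the second and third modules only (the SWT denoising stage is omitted, and the toy "beats" below are fabricated stand-ins rather than real ECG data), one could project each beat onto a few Hermite basis functions and feed the resulting coefficients to an MLP classifier:

```python
import numpy as np
from math import factorial, pi
from numpy.polynomial.hermite import hermval
from sklearn.neural_network import MLPClassifier

def hermite_features(beat, n_funcs=6):
    # Feature extraction: coefficients of the beat against the first few
    # normalised Hermite functions psi_n(x) = H_n(x) * exp(-x^2/2) / Z_n.
    x = np.linspace(-3, 3, beat.size)
    dx = x[1] - x[0]
    feats = []
    for n in range(n_funcs):
        c = np.zeros(n + 1)
        c[n] = 1.0
        psi = hermval(x, c) * np.exp(-x**2 / 2)
        psi /= np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
        feats.append(float(np.sum(beat * psi) * dx))
    return np.array(feats)

# Toy data: two noisy synthetic beat shapes standing in for segmented ECG beats.
rng = np.random.default_rng(0)
t = np.linspace(-3, 3, 200)
shapes = [np.exp(-4 * t**2), np.exp(-2 * (t - 0.8)**2)]
X = np.array([hermite_features(s + 0.05 * rng.standard_normal(t.size))
              for s in shapes * 50])
y = np.array([0, 1] * 50)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```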
Optimizing Multiple Response Problem Using Artificial Neural Networks and Genetic Algorithm
This paper proposes a new intelligent approach for solving multi-response statistical optimization problems. In most real-world optimization problems, we must adjust process variables to achieve optimal levels of the output variables (response variables). Usual optimization methods often begin by estimating the relation function between the response variable and the control variables ...
Applying evolutionary optimization on the airfoil design
In this paper, lift and drag coefficients were numerically investigated using the NUMECA software for a set of 4-digit NACA airfoils. Two metamodels based on evolved group method of data handling (GMDH)-type neural networks were then obtained for modeling both the lift coefficient (CL) and the drag coefficient (CD) with respect to the geometrical design parameters. After using such obtained polynomial n...
Hybrid interior point training of modular neural networks
Modular neural networks use a single gating neuron to select the outputs of a collection of agent neurons. Expectation-maximization (EM) algorithms provide one way of training modular neural networks to approximate non-linear functionals. This paper introduces a hybrid interior-point (HIP) algorithm for training modular networks. The HIP algorithm combines an interior-point linear programming (...
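Training such a network with EM or the proposed hybrid interior-point method is beyond a few lines, but the gating idea itself is easy to show. Below is a small, assumed-for-illustration forward pass in which a single gating unit either hard-selects one agent network's output (as described above) or mixes the agents softly:

```python
import numpy as np

def modular_forward(x, agents, gate_w, hard=True):
    # agents: callables mapping the input vector to a scalar output
    # gate_w: (n_agents, dim) weights of the single gating unit
    outputs = np.array([agent(x) for agent in agents])
    scores = gate_w @ x                         # one gating score per agent
    if hard:
        return outputs[int(np.argmax(scores))]  # hard selection of one agent
    soft = np.exp(scores - scores.max())        # soft mixture-of-experts variant
    return (soft / soft.sum()) @ outputs

rng = np.random.default_rng(0)
dim, n_agents = 4, 3
agents = [(lambda w: (lambda x: float(np.tanh(w @ x))))(rng.standard_normal(dim))
          for _ in range(n_agents)]
gate_w = rng.standard_normal((n_agents, dim))
print(modular_forward(rng.standard_normal(dim), agents, gate_w))
```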
Publication date: 2013